
    Metaphor as categorisation: a connectionist implementation

    A key issue for models of metaphor comprehension is to explain how, in a metaphorical comparison "A is B", only some features of B are transferred to A. The features of B that are transferred to A depend both on A and on B. This is the central thrust of Black's well-known interaction theory of metaphor comprehension (1979). However, this theory is somewhat abstract, and it is not obvious how it may be implemented in terms of mental representations and processes. In this paper we describe a simple computational model of on-line metaphor comprehension which combines Black's interaction theory with the idea that metaphor comprehension is a type of categorisation process (Glucksberg & Keysar, 1990, 1993). The model is based on a distributed connectionist network representing semantic memory (McClelland & Rumelhart, 1986). The network learns feature-based information about various concepts. A metaphor is comprehended by applying a representation of the first term A to the network storing knowledge of the second term B, in an attempt to categorise it as an exemplar of B. The output of this network is a representation of A transformed by the knowledge of B. We explain how this process embodies an interaction of knowledge between the two terms of the metaphor, how it accords with the contemporary view that literal and metaphorical comparisons are comprehended by identical mechanisms (Gibbs, 1994), and how it both accounts for existing empirical evidence (Glucksberg, McGlone, & Manfredi, 1997) and generates new predictions. In this model, the distinction between literal and metaphorical language is one of degree, not of kind.
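    The interaction-as-categorisation idea can be illustrated with a deliberately minimal sketch. This is not the paper's connectionist network: the concepts, feature names, and gating rule below are invented for illustration. A "memory" for the vehicle B is trained from B's exemplars, and the topic A is passed through it; a feature transfers only to the extent that both A and B support it, which is the interaction at issue.

```python
# Minimal illustration (not the paper's model): metaphor comprehension
# as categorisation of topic A by a memory trained on vehicle B.
# Concepts, features, and the gating rule are invented here.

def train_prototype(exemplars):
    """A toy 'semantic memory' for B: the mean of its feature vectors."""
    keys = exemplars[0].keys()
    n = len(exemplars)
    return {k: sum(e[k] for e in exemplars) / n for k in keys}

def comprehend(topic, vehicle_memory, gain=0.5):
    """Categorise topic A as an exemplar of B: a feature is enhanced in
    proportion to BOTH its value in A and its strength in B's memory,
    so which features transfer depends on the two terms interacting."""
    return {k: min(1.0, topic[k] + gain * topic[k] * vehicle_memory[k])
            for k in topic}

# "My lawyer is a shark", over toy features
sharks = [
    {"aggressive": 1.0, "tenacious": 0.9, "swims": 1.0, "grey": 0.8},
    {"aggressive": 0.9, "tenacious": 1.0, "swims": 1.0, "grey": 0.6},
]
lawyer = {"aggressive": 0.4, "tenacious": 0.5, "swims": 0.0, "grey": 0.1}

out = comprehend(lawyer, train_prototype(sharks))
# "aggressive" and "tenacious" are boosted, while "swims", absent from
# the lawyer representation, is not transferred at all
```

    Because the same rule applies to literal comparisons, the literal/metaphorical difference in a sketch like this is also one of degree: it depends only on how much of B's feature distribution the topic already overlaps.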

    Connectionism and psychological notions of similarity

    Kitcher (1996) offers a critique of connectionism based on the belief that connectionist information processing relies inherently on metric similarity relations. Metric similarity measures are independent of the order of comparison (they are symmetrical), whereas human similarity judgments are asymmetrical. We answer this challenge by describing how connectionist systems naturally produce asymmetric similarity effects. Similarity is viewed as an implicit byproduct of information processing (in particular, categorization), whereas the reporting of similarity judgments is a separate and explicit meta-cognitive process. The view of similarity as a process rather than the product of an explicit comparison is discussed in relation to the spatial, feature, and structural theories of similarity.
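    The order-dependence at issue can be shown with a toy directional measure. This is illustrative only: in the paper the asymmetry emerges from network processing, not from an explicit formula like this, and the feature sets below are invented. Similarity of a source to a target is taken as the fraction of the source's features the target accounts for.

```python
def directed_similarity(source, target):
    """Fraction of the source's features covered by the target. Because
    the normaliser is the source's own feature count, the measure is
    asymmetric and hence non-metric -- a toy stand-in for similarity
    arising from processing one item through knowledge of another."""
    return len(source & target) / len(source)

# Invented feature sets: a sparse concept vs a richer one
pony = {"mane", "tail", "four_legs", "small"}
horse = {"mane", "tail", "four_legs", "large", "rideable", "racing"}

# pony compared to horse is judged more similar than horse to pony,
# the classic direction-of-comparison asymmetry
```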

    Mapping the origins of time: Scalar errors in infant time estimation

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and 14-month-olds' responses to the omission of a recurring target, on either a 3- or 5-s cycle. At all ages, (a) both fixation and pupil dilation measures were time-locked to the periodicity of the test interval, and (b) estimation errors grew linearly with the length of the interval, suggesting that the scalar property, a trademark of interval timing, is in place from 4 months.
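    The scalar signature described above can be simulated in a few lines. This is a sketch, not the paper's analysis, and the coefficient of variation used is an arbitrary choice: when noise has a standard deviation that is a fixed fraction of the interval, estimation error grows linearly with interval length.

```python
import random
import statistics

def scalar_estimates(interval, n, cv=0.15, seed=0):
    """Draw n interval estimates whose noise SD is a fixed fraction
    (cv) of the interval -- the scalar property. A pacemaker counting
    independent ticks would instead give SD growing with sqrt(t)."""
    rng = random.Random(seed)
    return [rng.gauss(interval, cv * interval) for _ in range(n)]

# Simulated estimates for the two cycle lengths used in the study
sd3 = statistics.stdev(scalar_estimates(3.0, 5000, seed=1))
sd5 = statistics.stdev(scalar_estimates(5.0, 5000, seed=2))
# sd5 / sd3 tracks the 5/3 ratio of the intervals themselves
```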

    A connectionist account of the emergence of the literal-metaphorical-anomalous distinction in young children

    We present the first developmental computational model of metaphor comprehension, which seeks to relate the emergence of a distinction between literal and non-literal similarity in young children to the development of semantic representations. The model gradually learns to distinguish literal from metaphorical semantic juxtapositions as it acquires more knowledge about the vehicle domain. In accordance with Keil (1986), the separation of literal from metaphorical comparisons is found to depend on the maturity of the vehicle concept stored within the network. The model generates a number of explicit novel predictions.

    The time course of routine action

    Previous studies of action selection in routinized tasks have used error rates as their sole dependent measure (e.g. Reason, 1979; Schwartz et al., 1998). Consequently, conclusions about the underlying mechanisms of correct behavior are necessarily indirect. The present experiment examines the performance of normal subjects in the prototypical coffee task (Botvinick & Plaut, 2004) when carried out in a virtual environment on screen. This has the advantage of (a) constraining the possible errors more tightly than a real-world environment, and (b) giving access to latencies as an additional, finer-grained measure of performance. We report error data and timing of action selection at the crucial branching points for the production of routinized task sequences, both with and without a secondary task. Processing branching points leads to increased latencies. The presence of the secondary task has a greater effect on latencies at branching points than at equivalent non-branching points. Furthermore, error data and latencies dissociate, suggesting that exact timing is a valid and valuable source of information when trying to understand the processes that govern routine tasks. The results of the experiment are discussed in relation to their implications for computational accounts of routine action selection.

    “Are you looking at me?” How children’s gaze judgments improve with age

    Adults’ judgments of another person’s gaze reflect both sensory (e.g., perceptual) and nonsensory (e.g., decisional) processes. We examined how children’s performance on a gaze categorization task develops over time by varying uncertainty in the stimulus presented to 6- to 11-year-olds (n = 57). We found that younger children responded “direct” over a wider range of gaze deviations. We also found that increasing uncertainty led to an increase in direct responses across all age groups. A simple model to account for these data revealed that although younger children had a noisier sensory representation of the stimulus, most developmental changes in gaze judgments were due to a change in children’s response criteria (category boundaries). These results suggest that although the core mechanisms for gaze processing are already in place by the age of 6, their development continues across the whole of childhood.
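    A criterion model of this kind can be sketched with generic signal detection machinery. The parameter values below are invented, not the paper's fitted estimates: the observer responds "direct" whenever a noisy internal sample of gaze direction lands within a criterion band around zero.

```python
import math

def p_direct(gaze_deg, sensory_sd, criterion_deg):
    """P(respond 'direct') when an internal sample of the gaze angle,
    corrupted by Gaussian sensory noise, lands within +/- criterion."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return (phi((criterion_deg - gaze_deg) / sensory_sd)
            - phi((-criterion_deg - gaze_deg) / sensory_sd))

# A wider criterion (a toy stand-in for younger children) yields more
# "direct" responses at the same gaze deviation; so does noisier
# sensory evidence, mirroring the uncertainty manipulation.
```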

    TRACX2: a connectionist autoencoder using graded chunks to model infant visual statistical learning

    Even newborn infants are able to extract structure from a stream of sensory inputs; yet how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights, and recognizing these chunks when they re-occur in the input stream. Chunks are graded rather than all-or-nothing in nature. As chunks are learnt, their component parts become more and more tightly bound together. TRACX2 successfully models the data from five experiments from the infant visual statistical learning literature, including tasks involving forward and backward transitional probabilities, low-salience embedded chunk items, part-sequences, and illusory items. The model also captures performance differences across ages through the tuning of a single learning-rate parameter. These results suggest that infant statistical learning is underpinned by the same domain-general learning mechanism that operates in auditory statistical learning and, potentially, in adult artificial grammar learning.
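    The graded-chunk idea (though not TRACX2's autoencoder architecture, which stores chunks in distributed weights) can be caricatured with a table of pair strengths that all drift toward ceiling at one learning rate; the stream and rate below are invented.

```python
def update_chunks(sequence, chunks=None, lr=0.3):
    """Graded chunking caricature (not TRACX2 itself): each adjacent
    pair's strength moves toward 1.0 at a single learning rate every
    time the pair recurs, so chunks are graded, not all-or-nothing."""
    chunks = dict(chunks or {})
    for a, b in zip(sequence, sequence[1:]):
        s = chunks.get((a, b), 0.0)
        chunks[(a, b)] = s + lr * (1.0 - s)
    return chunks

# A stream built from the repeating "word" ABC
chunks = update_chunks(list("ABCABCABC"))
# Within-word pairs like (A, B) recur more often than the boundary
# pair (C, A), so they end up with higher graded strength
```

    A single learning rate governs how fast every pair strengthens, loosely analogous to the one parameter the model tunes to capture age differences.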

    Generating Explanations from Deep Reinforcement Learning Using Episodic Memory

    Deep Reinforcement Learning (RL) involves the use of Deep Neural Networks (DNNs) to make sequential decisions in order to maximize reward. For many tasks, the resulting sequence of actions produced by a Deep RL policy can be long and difficult for humans to understand. A crucial component of human explanations is selectivity, whereby only key decisions and causes are recounted. Imbuing Deep RL agents with such an ability would make their resulting policies easier to understand from a human perspective and generate a concise set of instructions to aid the learning of future agents. To this end we use a Deep RL agent with an episodic memory system to identify and recount key decisions during policy execution. We show that these decisions form a short, human-readable explanation that can also be used to speed up the learning of naive Deep RL agents in an algorithm-independent manner.
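    Selectivity itself can be illustrated independently of any particular agent. The episode structure and scoring rule below are invented, not the paper's method: recorded decisions are ranked by an importance signal and only the top few are reported.

```python
def key_decisions(episode, k=2):
    """Keep only the k decisions with the largest change in estimated
    state value -- one simple stand-in for an 'importance' signal that
    an episodic memory could store alongside each decision."""
    ranked = sorted(episode,
                    key=lambda e: abs(e["value_after"] - e["value_before"]),
                    reverse=True)
    return [e["action"] for e in ranked[:k]]

# A toy recorded episode (fields and actions invented for illustration)
episode = [
    {"action": "open_door", "value_before": 0.10, "value_after": 0.90},
    {"action": "step_left", "value_before": 0.40, "value_after": 0.42},
    {"action": "pick_key",  "value_before": 0.00, "value_after": 0.60},
    {"action": "wait",      "value_before": 0.50, "value_after": 0.50},
]
# The two pivotal actions survive as the explanation; routine steps drop out
```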

    A connectionist account of interference effects in early infant memory and categorization

    An unusual asymmetry has been observed in natural category formation in infants (Quinn, Eimas, and Rosenkrantz, 1993). Infants who are initially exposed to a series of pictures of cats and then are shown a dog and a novel cat show significantly more interest in the dog than in the cat. However, when the order of presentation is reversed (dogs are seen first, then a cat and a novel dog), the cat attracts no more attention than the dog. We show that a simple connectionist network can model this unexpected learning asymmetry, and we propose that the asymmetry arises naturally from the asymmetric overlap of the feature distributions of the two categories: the values of the cat features are subsumed by those of the dog features, but not vice versa. The autoencoder used for the experiments presented in this paper also reproduces exclusivity effects in the two categories, as well as the reported catastrophic interference of dogs on previously learned cats, but not vice versa. The results of the modeling suggest that connectionist methods are ideal for exploring early infant knowledge acquisition.
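    The subsumption account can be made concrete with a toy one-dimensional feature. The ranges below are invented and the real stimuli are multidimensional, but the logic is the same: values typical of cats fall inside the range spanned by dogs, while many dog values fall outside the cat range, so novelty detection is asymmetric.

```python
# Toy 1-D feature (e.g., a "size"-like dimension); ranges invented.
CAT_RANGE = (4.0, 6.0)   # narrow: subsumed by the dog range
DOG_RANGE = (3.0, 9.0)   # broad: covers the cat range entirely

def is_novel(value, learned_range):
    """An item looks novel when its feature value falls outside the
    range spanned by the learned category's exemplars."""
    lo, hi = learned_range
    return not (lo <= value <= hi)

novel_dog, novel_cat = 8.0, 5.0
# After familiarisation with cats, a dog falls outside the learned
# range and attracts interest; after dogs, a cat still falls inside
# the learned range, so no extra interest is predicted.
```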

    Could category-specific semantic deficits reflect differences in the distribution of features within a unified semantic memory?

    Book synopsis: This volume features the complete text of the material presented at the Twentieth Annual Conference of the Cognitive Science Society. As in previous years, the symposium included an interesting mixture of papers on many topics from researchers with diverse backgrounds and different goals, presenting a multifaceted view of cognitive science. The volume contains papers, posters, and summaries of symposia presented at this leading conference, which brings cognitive scientists together to discuss issues of theoretical and applied concern. Submitted presentations are represented in these proceedings as "long papers" (those presented as spoken presentations and "full posters" at the conference) and "short papers" (those presented as "abstract posters" by members of the Cognitive Science Society).